Ethical risk


ff4ERA: A new Fuzzy Framework for Ethical Risk Assessment in AI

Dyoub, Abeer, Letteri, Ivan, Lisi, Francesca A.

arXiv.org Artificial Intelligence

The emergence of Symbiotic AI (SAI) introduces new challenges to ethical decision-making as it deepens human-AI collaboration. As symbiosis grows, AI systems pose greater ethical risks, including harm to human rights and trust. Ethical Risk Assessment (ERA) thus becomes crucial for guiding decisions that minimize such risks. However, ERA is hindered by uncertainty, vagueness, and incomplete information, and morality itself is context-dependent and imprecise. This motivates the need for a flexible, transparent, yet robust framework for ERA. Our work supports ethical decision-making by quantitatively assessing and prioritizing multiple ethical risks so that artificial agents can select actions aligned with human values and acceptable risk levels. We introduce ff4ERA, a fuzzy framework that integrates Fuzzy Logic, the Fuzzy Analytic Hierarchy Process (FAHP), and Certainty Factors (CF) to quantify ethical risks via an Ethical Risk Score (ERS) for each risk type. The final ERS combines the FAHP-derived weight, propagated CF, and risk level. The framework offers a robust mathematical approach for collaborative ERA modeling and systematic, step-by-step analysis. A case study confirms that ff4ERA yields context-sensitive, ethically meaningful risk scores reflecting both expert input and sensor-based evidence. Risk scores vary consistently with relevant factors while remaining robust to unrelated inputs. Local sensitivity analysis shows predictable, mostly monotonic behavior across perturbations, and global Sobol analysis highlights the dominant influence of expert-defined weights and certainty factors, validating the model design. Overall, the results demonstrate ff4ERA's ability to produce interpretable, traceable, and risk-aware ethical assessments, enabling what-if analyses and guiding designers in calibrating membership functions and expert judgments for reliable ethical decision support.
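The abstract describes the ERS as combining a FAHP-derived weight, a propagated certainty factor, and a fuzzy risk level, without stating the aggregation formula. A minimal sketch, assuming a multiplicative combination with all components normalized to [0, 1] (the function name, example risk types, and numeric values are illustrative, not taken from the paper):

```python
def ethical_risk_score(fahp_weight: float, certainty_factor: float,
                       risk_level: float) -> float:
    """Combine the three ERS components (each in [0, 1]) into one score.

    Assumes a simple product aggregation: any component near zero
    (an unimportant risk type, weak evidence, or low risk level)
    drives the overall score toward zero.
    """
    for v in (fahp_weight, certainty_factor, risk_level):
        if not 0.0 <= v <= 1.0:
            raise ValueError("all components must lie in [0, 1]")
    return fahp_weight * certainty_factor * risk_level


# Rank hypothetical risk types so an agent can prioritize mitigation.
risks = {
    "privacy": ethical_risk_score(0.40, 0.90, 0.70),
    "trust":   ethical_risk_score(0.35, 0.60, 0.50),
}
ranked = sorted(risks, key=risks.get, reverse=True)
```

A product is only one plausible choice; a weighted sum or an ordered weighted average would support the same kind of what-if analysis over expert weights and certainty factors.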


A Fuzzy Approach to the Specification, Verification and Validation of Risk-Based Ethical Decision Making Models

Dyoub, Abeer, Lisi, Francesca A.

arXiv.org Artificial Intelligence

The ontological and epistemic complexities inherent in the moral domain make it challenging to establish clear standards for evaluating the performance of a moral machine. In this paper, we present a formal method to describe Ethical Decision Making models based on ethical risk assessment. Then, we show how these models that are specified as fuzzy rules can be verified and validated using fuzzy Petri nets. A case study from the medical field is considered to illustrate the proposed approach.
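To make "models specified as fuzzy rules" concrete, here is a minimal sketch of one Mamdani-style rule evaluation; the rule, the triangular membership functions, and their parameters are invented for illustration and are not drawn from the paper:

```python
# One fuzzy rule of the kind such models are built from, e.g.
# "IF severity IS high AND likelihood IS high THEN risk IS high".

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def rule_high_risk(severity: float, likelihood: float) -> float:
    """Firing strength of the rule: fuzzy AND realized as min."""
    sev_high = tri(severity, 0.5, 1.0, 1.5)    # 'high' over inputs in [0, 1]
    lik_high = tri(likelihood, 0.5, 1.0, 1.5)
    return min(sev_high, lik_high)
```

In a fuzzy Petri net encoding, each such rule becomes a transition whose firing strength propagates the antecedent truth degrees to the consequent place, which is what makes the rule base amenable to verification.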


Enhancing Trust Through Standards: A Comparative Risk-Impact Framework for Aligning ISO AI Standards with Global Ethical and Regulatory Contexts

Sankaran, Sridharan

arXiv.org Artificial Intelligence

As artificial intelligence (AI) reshapes industries and societies, ensuring its trustworthiness -- through mitigating ethical risks like bias, opacity, and accountability deficits -- remains a global challenge. International Organization for Standardization (ISO) AI standards, such as ISO/IEC 24027 and 24368, aim to foster responsible development by embedding fairness, transparency, and risk management into AI systems. However, their effectiveness varies across diverse regulatory landscapes, from the EU's risk-based AI Act to China's stability-focused measures and the U.S.'s fragmented state-led initiatives. This paper introduces a novel Comparative Risk-Impact Assessment Framework to evaluate how well ISO standards address ethical risks within these contexts, proposing enhancements to strengthen their global applicability. By mapping ISO standards to the EU AI Act and surveying regulatory frameworks in ten regions -- including the UK, Canada, India, Japan, Singapore, South Korea, and Brazil -- we establish a baseline for ethical alignment. The framework, applied to case studies in the EU, US-Colorado, and China, reveals gaps: voluntary ISO standards falter in enforcement (e.g., Colorado) and undervalue region-specific risks like privacy (China). We recommend mandatory risk audits, region-specific annexes, and a privacy-focused module to enhance ISO's adaptability. This approach not only synthesizes global trends but also offers a replicable tool for aligning standardization with ethical imperatives, fostering interoperability and trust in AI worldwide. Policymakers and standards bodies can leverage these insights to evolve AI governance, ensuring it meets diverse societal needs as the technology advances.


Making Intelligence: Ethical Values in IQ and ML Benchmarks

Blili-Hamelin, Borhane, Hancox-Li, Leif

arXiv.org Artificial Intelligence

In recent years, ML researchers have wrestled with defining and improving machine learning (ML) benchmarks and datasets. In parallel, some have trained a critical lens on the ethics of dataset creation and ML research. In this position paper, we highlight the entanglement of ethics with seemingly "technical" or "scientific" decisions about the design of ML benchmarks. Our starting point is the existence of multiple overlooked structural similarities between human intelligence benchmarks and ML benchmarks. Both types of benchmarks set standards for describing, evaluating, and comparing performance on tasks relevant to intelligence -- standards that many scholars of human intelligence have long recognized as value-laden. We use perspectives from feminist philosophy of science on IQ benchmarks and thick concepts in social science to argue that values need to be considered and documented when creating ML benchmarks. It is neither possible nor desirable to avoid this choice by creating value-neutral benchmarks. Finally, we outline practical recommendations for ML benchmark research ethics and ethics review.


Concrete Safety for ML Problems: System Safety for ML Development and Assessment

Jatho, Edgar W., Mailloux, Logan O., Williams, Eugene D., McClure, Patrick, Kroll, Joshua A.

arXiv.org Artificial Intelligence

Many stakeholders struggle to justify reliance on ML-driven systems due to the risk of harm these systems may cause. Concerns of trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements. Moreover, such risks in complex ML-driven systems present a special challenge as they are often difficult to foresee, arising over periods of time, across populations, and at scale. These risks often arise not from poor ML development decisions or low performance directly but rather emerge through the interactions amongst ML development choices, the context of model use, environmental factors, and the effects of a model on its target. Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems. In this work, we apply a state-of-the-art systems safety approach to concrete applications of ML with notable social and ethical risks to demonstrate a systematic means for meeting the assurance requirements needed to argue for safe and trustworthy ML in sociotechnical systems.


Your AI may not be ethical. Here's what to do

#artificialintelligence

The people you manage may not be ethical and you therefore need guardrails to protect yourself and your organization. But the same is true of the machines you manage – the apps and algorithms of artificial intelligence that act on your behalf. That's harder to get your head around and probably trickier to control. "When it comes to AI, there are loads of ethical risks that need mitigating," consultant Reid Blackman writes in Ethical Machines. He points to an Uber self-driving car that killed a woman, the investigation of Goldman Sachs for creating AI that set credit card limits lower for women than men, and Amazon abandoning its resume-reading AI after two years because the company couldn't figure out how to stop it from discriminating against women.


System Safety Engineering for Social and Ethical ML Risks: A Case Study

Jatho, Edgar W. III, Mailloux, Logan O., Rismani, Shalaleh, Williams, Eugene D., Kroll, Joshua A.

arXiv.org Artificial Intelligence

Governments, industry, and academia have undertaken efforts to identify and mitigate harms in ML-driven systems, with a particular focus on social and ethical risks of ML components in complex sociotechnical systems. However, existing approaches are largely disjointed, ad-hoc and of unknown effectiveness. Systems safety engineering is a well established discipline with a track record of identifying and managing risks in many complex sociotechnical domains. We adopt the natural hypothesis that tools from this domain could serve to enhance risk analyses of ML in its context of use. To test this hypothesis, we apply a "best of breed" systems safety analysis, Systems Theoretic Process Analysis (STPA), to a specific high-consequence system with an important ML-driven component, namely the Prescription Drug Monitoring Programs (PDMPs) operated by many US States, several of which rely on an ML-derived risk score. We focus in particular on how this analysis can extend to identifying social and ethical risks and developing concrete design-level controls to mitigate them.


The Ethical Risks of Analyzing Crisis Events on Social Media with Machine Learning

Kraft, Angelie, Usbeck, Ricardo

arXiv.org Artificial Intelligence

Social media platforms provide a continuous stream of real-time news regarding crisis events on a global scale. Several machine learning methods utilize the crowd-sourced data for the automated detection of crises and the characterization of their precursors and aftermaths. Early detection and localization of crisis-related events can help save lives and economies. Yet, the applied automation methods introduce ethical risks worthy of investigation -- especially given their high-stakes societal context. This work identifies and critically examines ethical risk factors of social media analyses of crisis events focusing on machine learning methods. We aim to sensitize researchers and practitioners to the ethical pitfalls and promote fairer and more reliable designs.


From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML

Rismani, Shalaleh, Shelby, Renee, Smart, Andrew, Jatho, Edgar, Kroll, Joshua, Moon, AJung, Rostamzadeh, Negar

arXiv.org Artificial Intelligence

Inappropriate design and deployment of machine learning (ML) systems leads to negative downstream social and ethical impact -- described here as social and ethical risks -- for users, society and the environment. Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent. We interviewed 30 industry practitioners on their current social and ethical risk management practices, and collected their first reactions on adapting safety engineering frameworks into their practice -- namely, System Theoretic Process Analysis (STPA) and Failure Mode and Effects Analysis (FMEA). Our findings suggest STPA/FMEA can provide appropriate structure toward social and ethical risk assessment and mitigation processes. However, we also find nontrivial challenges in integrating such frameworks in the fast-paced culture of the ML industry. We call on the ML research community to strengthen existing frameworks and assess their efficacy, ensuring that ML systems are safer for all people.